
    Usable Security. A Systematic Literature Review

    Usable security involves designing security measures that accommodate users’ needs and behaviors. Balancing usability and security poses a challenge: the more secure a system is, the less usable it tends to be; conversely, more usable systems tend to be less secure. Numerous studies spanning psychology and computer science/engineering have addressed this balance, contributing diverse perspectives and necessitating a systematic review of the strategies and findings in this area. This systematic literature review examined articles on usable security published from 2005 to 2022. A total of 55 research studies were selected after evaluation. The studies were broadly categorized into four main clusters, each addressing a different aspect: (1) usability of authentication methods, (2) helping security developers improve usability, (3) design strategies for influencing user security behavior, and (4) formal models for usable security evaluation. Based on this review, we report that the field shows a certain immaturity: studies tend toward system comparisons rather than establishing robust design guidelines grounded in a thorough analysis of user behavior. A common theoretical and methodological background is one of the main areas for improvement in this line of research. Moreover, the absence of usable-security requirements in almost all development contexts strongly discourages the adoption of good practices from the earliest stages of development.

    The Cybersecurity Awareness Inventory (CAIN). Early Phases of Development of a Tool for Assessing Cybersecurity Knowledge Based on ISO/IEC 27032

    Knowledge of possible cyber threats, as well as awareness of appropriate security measures, plays a crucial role in individuals’ ability not only to discriminate between an innocuous and a dangerous cyber event but, more importantly, to initiate appropriate cybersecurity behaviors. The purpose of this study was to construct a Cybersecurity Awareness INventory (CAIN) to be used as an instrument to assess users’ cybersecurity knowledge by providing a proficiency score that can be correlated with cybersecurity behaviors. A scale consisting of 46 items was derived from ISO/IEC 27032. The questionnaire was administered to a sample of college students (N = 277). Based on cybersecurity behaviors reported to the research team by the college’s IT department, participants were divided into three groups according to the risk reports they had received in the previous nine months (no risk, low risk, and medium risk). The ANOVA results showed a statistically significant difference in CAIN scores between the no-risk and medium-risk groups; as expected, CAIN scores were lower in the medium-risk group. The CAIN has the potential to be a useful assessment tool for cyber training programs as well as for future studies investigating individuals’ vulnerability to cyberthreats.
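
    The group comparison described here is a standard one-way ANOVA. As a purely illustrative sketch, the Python snippet below shows how such an analysis might be run; the scores and group sizes are invented (only the total N = 277 matches the abstract) and are not the study’s data.

        import numpy as np
        from scipy import stats

        # Hypothetical CAIN proficiency scores for the three risk groups;
        # all values and group sizes are invented for illustration.
        rng = np.random.default_rng(0)
        no_risk = rng.normal(34, 5, 120)
        low_risk = rng.normal(32, 5, 90)
        medium_risk = rng.normal(29, 5, 67)

        # One-way ANOVA across the three groups, mirroring the reported analysis.
        f_stat, p_value = stats.f_oneway(no_risk, low_risk, medium_risk)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    A significant F would then typically be followed by pairwise post-hoc contrasts, such as the no-risk vs. medium-risk comparison highlighted in the abstract.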

    Getting Rid of the Usability/Security Trade-Off: A Behavioral Approach

    The usability/security trade-off denotes the inversely proportional relationship that seems to exist between usability and security: the more secure a system is, the less usable it will be, and conversely, more usable systems will be less secure. So far, attempts to reduce the gap between usability and security have been unsuccessful. In this paper, we offer a theoretical perspective that exploits this trade-off rather than fighting it, together with a practical approach that uses contextual improvements in system usability to reward secure behavior. The theoretical perspective, based on the concept of reinforcement, has been successfully applied in several domains, and there is no reason to believe that the cybersecurity domain will be an exception. Although the purpose of this article is to devise a research agenda, we also provide an example based on a single-case study in which we apply the rationale underlying our proposal in a laboratory experiment.
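
    One way to read the proposal is that usability improvements are delivered contingently on secure behavior, so they function as reinforcers. Below is a minimal sketch of such a contingency using an invented session-timeout “reward”; the paper does not specify this mechanism, and all names and thresholds are hypothetical.

        # Hypothetical reinforcement contingency: relax a usability
        # constraint (session timeout) whenever the user emits the
        # target secure behavior; all thresholds are invented.
        BASELINE_TIMEOUT_MIN = 10
        MAX_TIMEOUT_MIN = 60

        def update_timeout(current_timeout: int, behaved_securely: bool) -> int:
            """Return the next session timeout, lengthened as a reward
            for secure behavior (e.g., choosing a strong password)."""
            if behaved_securely:
                return min(current_timeout + 10, MAX_TIMEOUT_MIN)
            return BASELINE_TIMEOUT_MIN  # withhold the reward otherwise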

    A lack of focus, not task avoidance, makes the difference: work routines in procrastinators and non-procrastinators

    Procrastination may be seen as the outcome of a learning history of delaying the onset of task execution and its completion, in terms of both time and effort. In this study, we examined the performance of 55 university students who carried out two writing tasks, each consisting of summarizing an academic paper within a different time slot (five vs. three days to complete). The two assignments were part of the class activity and were perceived by participants as homogeneous in terms of text appreciation and difficulty, thus making the two conditions comparable. The Pure Procrastination Scale was used to categorize subjects as high or low procrastinators and to compare their performance. Results show that students who report more procrastination behaviors tend to increase their productivity as the deadline approaches, whereas low procrastinators are more productive throughout the time at their disposal, with peak activity on the intermediate day. This strategy was consistent across the two deadlines (five vs. three days), and the difference between the two subgroups can be ascribed to a task-oriented coping style, which seems to be lacking in high procrastinators.
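
    To make the productivity comparison concrete, the sketch below forms high/low procrastinator groups with a median split on Pure Procrastination Scale totals and averages daily word counts; the actual study may have used a different cutoff, and none of the numbers are real.

        import numpy as np

        # Hypothetical data for 55 participants: PPS totals and daily
        # word counts over a five-day slot; all values are invented.
        rng = np.random.default_rng(1)
        pps_totals = rng.integers(12, 61, size=55)
        daily_words = rng.integers(0, 400, size=(55, 5))

        # Median split into high vs. low procrastinators, then compare
        # how the two groups distribute their work across the days.
        high = pps_totals > np.median(pps_totals)
        print("high:", daily_words[high].mean(axis=0).round(1))
        print("low: ", daily_words[~high].mean(axis=0).round(1))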

    Usability Evaluations Employing Online Panels Are Not Bias-Free

    A growing trend in UX research is the use of Online Panels (OPs), namely people enrolled in a web platform who have agreed to participate regularly in online studies and/or to execute simple, repetitive operations. The effect of such “professional respondents” on data quality has been questioned in a variety of fields (e.g., psychology and marketing). Notwithstanding the increasing use of OPs in UX research, there is a lack of studies investigating the bias affecting usability assessments provided by this type of respondent. In this paper, we address this issue by comparing the usability evaluations provided by professional respondents commonly involved in debugging activities, non-professional respondents, and naive people not belonging to any OP. In a set of three studies involving 138 participants in total, we examined the effects of both expertise and task type (debugging vs. browsing) on usability assessments. Results showed that individuals who performed the debugging task provided more positive usability ratings regardless of their skills; conversely, professional respondents provided more severe and critical ratings of perceived usability than non-professionals. Finally, the comparison between online panelists and naive users allowed us to better understand whether professional respondents can be involved in usability evaluations without jeopardizing them.
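
    The direction of the expertise effect reported here (professional respondents giving more severe ratings) could be checked with a simple independent-samples contrast like the one sketched below; the SUS-like 0–100 scores, group sizes, and means are assumptions, not the study’s data.

        import numpy as np
        from scipy import stats

        # Hypothetical perceived-usability ratings for the two
        # respondent types; all values are invented for illustration.
        rng = np.random.default_rng(2)
        professionals = rng.normal(62, 12, 46)
        non_professionals = rng.normal(70, 12, 46)

        # Independent-samples t-test, analogous to one contrast in the abstract.
        t_stat, p_value = stats.ttest_ind(professionals, non_professionals)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")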